I didn’t engage with the different scenarios because they weren’t related to my cruxes for the argument; I don’t expect the nature of human intelligence to give us much insight into the nature of machine intelligence in the same way I don’t expect bird watching to help you identify different models of airplane.
I think there are good reasons to believe that safety-focused AI/ML research will have strong payoffs now; in particular, the flash crash example I linked above and things like self-driving car ethics are current practical problems. I do think this is very different from what MIRI does, for example, and I’m not confident that I’d endorse MIRI as an EA cause (though I expect their work to be valuable). But there is a ton of unethical machine learning going on right now, and I expect both that ML safety can address substantial problems that already exist and that this research will contribute to the social standing and theoretical development of AI safety in the future.
In a sense, I don’t feel like we’re entirely in a dark cave. We’re in a cave with a bunch of glowing mushrooms, and they’re sort of going in a line, and we can try following that line in both directions because there are reasons to think that’ll lead us out of the tunnel. It might also be interesting to study the weird patterns they make along the way but I think that requires a better outside view argument, and the patterns they make when we leave the tunnel have a good chance of being totally different. Sorry if that metaphor got away from me.
Ah, okay. I’m not sure you are my audience in that case, or whether EAs are.
I’m interested in the people who care about takeoff and want to get more information about it. It seems that a more general AI is needed for programming (at least to do programming better than Genetic Programming, etc.), and only that will lead to takeoff. The only general intelligence we know of is human.
I don’t expect the nature of human intelligence to give us much insight into the nature of machine intelligence in the same way I don’t expect bird watching to help you identify different models of airplane.
I hope that studying birds might give you insight into the nature of lift and aerodynamics, which would help with airplanes. Actually, it turns out the Wright brothers got inspiration for their turning mechanism from birds:
The Wrights developed their wing warping theory in the summer of 1899 after observing the buzzards at Pinnacle Hill twisting the tips of their wings as they soared into the wind.
The Wrights made the right decision by focusing on large birds. It turns out that small birds don’t change the shape of their wings when flying, rather they change the speed of their flapping wings. For example, to start a left turn, the right wing is flapped more vigorously.
I agree that ML safety will be useful in the short term as we integrate ML systems into our day-to-day lives.
I work on a simple system that could one day become more general, so I am very interested in getting more details about generality in intelligence and what is possible before it gets to that stage.